
    Electronic control of elastomeric microfluidic circuits with shape memory actuators

    Recently, sophisticated fluidic circuits with hundreds of independent valves have been built by using multi-layer soft lithography to mold elastomers. However, this shrinking of microfluidic circuits has not been matched by a corresponding miniaturization of the actuation and interfacing elements that control them: while the fluidic circuits are small (~10–100 micron-wide channels), the Medusa's-head-like interface, consisting of external pneumatic solenoids and tubing or mechanical pins to control each independent valve, is larger by one to four orders of magnitude (mm to cm). Consequently, the dream of using large-scale integration in microfluidics for portable, high-throughput applications has been stymied. By combining multi-layer soft lithography with shape memory alloys (SMAs), we demonstrate electronically activated microfluidic components such as valves, pumps, latches, and multiplexers, assembled on printed circuit boards (PCBs). Thus, high-density, electronically controlled microfluidic chips can be integrated alongside standard opto-electronic components on a PCB. Furthermore, we introduce the idea of microfluidic states, which are combinations of valve states analogous to the instruction sets of integrated-circuit (IC) microprocessors. Microfluidic states may be represented in hardware or software, and we propose a control architecture that results in a logarithmic reduction of external control lines. These developments bring us closer to building microfluidic circuits that resemble electronic ICs both physically and in their abstract model.
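    The logarithmic reduction of control lines follows from binary addressing: n address lines can select among 2^n valves. A minimal Python sketch of the idea, in which the state names and valve layout are illustrative assumptions rather than the authors' implementation:

        from math import ceil, log2

        def address_lines_needed(num_valves: int) -> int:
            """Binary control lines needed to address num_valves valves."""
            return ceil(log2(num_valves))

        def valve_address(valve_id: int, n_lines: int) -> list[int]:
            """Bit pattern driven onto the control lines to select one valve."""
            return [(valve_id >> bit) & 1 for bit in reversed(range(n_lines))]

        # A "microfluidic state" is a named combination of valve settings,
        # analogous to an instruction in an IC microprocessor's instruction
        # set. These state names and valve counts are made up for illustration.
        STATES = {
            "LOAD":  {0: "open",   1: "closed", 2: "open"},
            "MIX":   {0: "closed", 1: "open",   2: "open"},
            "FLUSH": {0: "open",   1: "open",   2: "closed"},
        }

        n = address_lines_needed(256)            # 256 valves -> only 8 lines
        print(f"256 valves need {n} external control lines")
        print("address of valve 42:", valve_address(42, n))
        print("state 'MIX':", STATES["MIX"])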

    Context Aware Road-user Importance Estimation (iCARE)

    Road-users are a critical part of decision-making for both self-driving cars and driver assistance systems. Some road-users, however, are more important for decision-making than others because of their respective intentions, the ego vehicle's intention, and their effects on each other. In this paper, we propose a novel architecture for road-user importance estimation which takes advantage of the local and global context of the scene. For local context, the model exploits the appearance of the road users (which captures orientation, intention, etc.) and their location relative to the ego vehicle. The global context in our model is defined based on the feature map of the convolutional layer of the module that predicts the future path of the ego vehicle; this feature map contains rich global information about the scene (e.g., infrastructure, road lanes) as well as the ego vehicle's intention. Moreover, this paper introduces a new dataset of real-world driving, concentrated around intersections, which includes annotations of important road users. Systematic evaluations of our proposed method against several baselines show promising results. Comment: Published in: IEEE Intelligent Vehicles (IV), 201
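    A minimal sketch of the described local/global fusion, assuming PyTorch; all layer sizes and the fusion scheme are illustrative guesses, since the abstract does not specify them:

        import torch
        import torch.nn as nn

        class ImportanceEstimator(nn.Module):
            """Sketch of local/global fusion; all sizes are assumptions."""

            def __init__(self, appearance_dim=512, location_dim=4, global_dim=256):
                super().__init__()
                # Local context: road-user appearance plus location relative
                # to the ego vehicle.
                self.local = nn.Sequential(
                    nn.Linear(appearance_dim + location_dim, 128), nn.ReLU())
                # Global context: pooled feature map from the module that
                # predicts the ego vehicle's future path.
                self.global_ctx = nn.Sequential(
                    nn.Linear(global_dim, 128), nn.ReLU())
                self.head = nn.Linear(256, 1)    # per-road-user importance

            def forward(self, appearance, location, global_feat):
                local = self.local(torch.cat([appearance, location], dim=-1))
                ctx = self.global_ctx(global_feat)
                return torch.sigmoid(self.head(torch.cat([local, ctx], dim=-1)))

        model = ImportanceEstimator()
        score = model(torch.randn(1, 512), torch.randn(1, 4), torch.randn(1, 256))
        print(f"importance score: {score.item():.3f}")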

    Attention estimation by simultaneous analysis of viewer and view

    Abstract — This paper introduces a system for estimating the attention of a driver wearing a first-person-view camera, using salient objects to improve gaze estimation. A challenging dataset of pedestrians crossing intersections has been captured using Google Glass worn by a driver. A challenge unique to the first-person view from cars is that the interior of the car can take up a large part of the image. The proposed system automatically filters out the dashboard of the car, along with other parts of the instrumentation. The remaining area is used as a region of interest for a pedestrian detector. Two cameras looking at the driver are used to determine the direction of the driver's gaze by examining the eye corners and the center of the iris. This coarse gaze estimate is then linked to the detected pedestrians to determine which pedestrian the driver is focused on at any given time.
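    A minimal sketch of the coarse gaze-to-pedestrian linking described above; the linear iris-offset mapping and its gain are illustrative assumptions, not the paper's calibration:

        import numpy as np

        def coarse_gaze_point(eye_corners, iris_center, scene_size, gain=100.0):
            """Map the iris offset from the eye-corner midpoint to a point in
            the scene image; the linear mapping and gain are assumptions."""
            midpoint = np.mean(eye_corners, axis=0)
            offset = np.asarray(iris_center) - midpoint   # face-camera pixels
            return np.asarray(scene_size) / 2.0 + gain * offset

        def attended_pedestrian(gaze_xy, boxes):
            """Index of the detection whose box center is nearest the gaze."""
            centers = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
            return int(np.argmin([np.linalg.norm(np.subtract(c, gaze_xy))
                                  for c in centers]))

        gaze = coarse_gaze_point([(100, 120), (140, 120)], (123, 121), (1280, 720))
        print(attended_pedestrian(gaze, [(200, 300, 260, 450), (700, 310, 750, 440)]))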

    Understanding head and hand activities and coordination in naturalistic driving videos

    Abstract — In this work, we propose a vision-based analysis framework for recognizing in-vehicle activities such as interactions with the steering wheel, the instrument cluster, and the gear. The framework leverages two views for activity analysis: a camera looking at the driver's hands and another looking at the driver's head. The proposed techniques can be used by researchers to extract 'mid-level' information from video, that is, information that represents some semantic understanding of the scene but may still require an expert to distinguish difficult cases or to leverage the cues for drive analysis. By contrast, 'low-level' video is large in quantity and cannot be used unless processed in its entirety by an expert. This work aims to minimize manual labor, so that researchers can better benefit from the accessibility of the data and perform larger-scale studies.
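    A minimal sketch of fusing two-view cues into 'mid-level' labels; the region names, thresholds, and rules are illustrative assumptions, standing in for whatever the framework actually learns:

        def midlevel_activity(hand_region: str, head_pitch_deg: float) -> str:
            """Fuse a hand-location cue and a head-pose cue into one label."""
            if hand_region == "gear":
                return "shifting_gear"
            if hand_region == "instrument_cluster" and head_pitch_deg < -10:
                return "adjusting_instrument_cluster"   # hand and glance agree
            if hand_region == "wheel":
                return "steering"
            return "uncertain"   # difficult case, left for an expert to resolve

        # One cue pair per frame: (hand region, head pitch in degrees).
        for region, pitch in [("wheel", 0.0), ("gear", -5.0),
                              ("instrument_cluster", -20.0)]:
            print(region, pitch, "->", midlevel_activity(region, pitch))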

    Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis

    Automotive systems provide a unique opportunity for mobile vision technologies to improve road safety by understanding and monitoring the driver. In this work, we propose a real-time framework for early detection of driver maneuvers. The implications of this study would allow for better behavior prediction and, therefore, the development of more efficient advanced driver assistance and warning systems. Cues are extracted from an array of sensors observing the driver (head, hand, and foot), the environment (lane and surrounding vehicles), and the ego-vehicle state (speed, steering angle, etc.). Evaluation is performed on a real-world dataset with overtaking maneuvers, showing promising results. To gain better insight into the processes that characterize driver behavior, temporally discriminative cues are studied and visualized.
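    A minimal sketch of scoring a temporal window of driver, environment, and vehicle cues with a linear classifier; the feature layout, window size, and random weights are illustrative stand-ins for the trained model:

        import numpy as np

        WINDOW = 10          # frames of history per decision (assumed)
        CUES = 6             # e.g., head yaw, hand, foot, lane offset, gap, speed

        rng = np.random.default_rng(0)
        w = rng.normal(size=WINDOW * CUES)   # stand-in for trained weights

        def overtake_score(cue_window: np.ndarray) -> float:
            """Logistic score that an overtaking maneuver is beginning."""
            x = cue_window.reshape(-1)       # (WINDOW, CUES) -> flat vector
            return 1.0 / (1.0 + np.exp(-(w @ x)))

        cues = rng.normal(size=(WINDOW, CUES))
        print(f"overtake score: {overtake_score(cues):.3f}")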

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset of point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available for download from GitHub, through an online API, and through R packages.
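    A minimal sketch of pulling one model's forecasts directly from the Forecast Hub's GitHub repository with pandas; the team-model name, submission date, and file layout below are assumptions based on the repository's data-processed/ convention:

        import pandas as pd

        # Files are assumed to live under data-processed/<team-model>/ in
        # https://github.com/reichlab/covid19-forecast-hub; the model name
        # and submission date below are placeholders.
        BASE = ("https://raw.githubusercontent.com/reichlab/"
                "covid19-forecast-hub/master/data-processed")
        team_model = "COVIDhub-ensemble"
        date = "2021-01-04"

        df = pd.read_csv(f"{BASE}/{team_model}/{date}-{team_model}.csv",
                         dtype={"location": str})

        # Point forecasts of incident deaths at the national level ("US").
        point = df[(df["type"] == "point")
                   & (df["location"] == "US")
                   & (df["target"].str.contains("inc death"))]
        print(point[["forecast_date", "target", "value"]].head())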